
Creators/Authors contains: "Jahanian, Mohammad"

  1. In many scenarios, information must be disseminated over intermittently-connected environments when the network infrastructure becomes unavailable, e.g., during disasters where first responders need to send updates about critical tasks. If such updates pertain to a shared data set, dissemination consistency is important. This can be achieved through causal ordering and consensus. Popular consensus algorithms, e.g., Paxos, are most suited for connected environments. While some work has been done on designing consensus algorithms for intermittently-connected environments, such as the One-Third Rule (OTR) algorithm, there is still a need to improve their efficiency and timely completion. We propose CoNICE, a framework to ensure consistent dissemination of updates among users in intermittently-connected, infrastructure-less environments. It achieves efficiency by exploiting hierarchical namespaces for faster convergence and lower communication overhead. CoNICE provides three levels of consistency to users, namely replication, causality, and agreement. It uses epidemic propagation to provide adequate replication ratios, and optimizes and extends Vector Clocks to provide causality. To ensure agreement, CoNICE extends OTR to also support long-term network fragmentation and decision-invalidation scenarios; we define local and global consensus, pertaining to agreement within and across fragments, respectively. We integrate CoNICE's consistency preservation with a naming schema that follows a topic hierarchy-based dissemination framework, to improve functionality and performance. Using the Heard-Of model formalism, we prove CoNICE's consensus to be correct. Our technique extends previously established proof methods for consensus in asynchronous environments. Through city-scale simulations, we demonstrate CoNICE's scalability in achieving consistency, in terms of convergence time, network resource utilization, and energy consumption.
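CoNICE's causality layer optimizes and extends Vector Clocks. As a point of reference, the following is a minimal sketch of the textbook vector-clock mechanism it builds on, not CoNICE's optimized variant; class and variable names are illustrative.

```python
# Minimal vector-clock sketch for causal ordering of updates.
# This illustrates the textbook mechanism CoNICE extends; it is not
# CoNICE's optimized implementation, and all names are illustrative.

class VectorClock:
    def __init__(self, node_id, nodes):
        self.node_id = node_id
        self.clock = {n: 0 for n in nodes}

    def tick(self):
        """Increment the local component before sending an update."""
        self.clock[self.node_id] += 1
        return dict(self.clock)

    def merge(self, remote):
        """Component-wise max on receiving a remote update."""
        for n, c in remote.items():
            self.clock[n] = max(self.clock.get(n, 0), c)
        self.clock[self.node_id] += 1

    @staticmethod
    def happened_before(a, b):
        """True if clock a causally precedes clock b."""
        keys = set(a) | set(b)
        return (all(a.get(k, 0) <= b.get(k, 0) for k in keys)
                and any(a.get(k, 0) < b.get(k, 0) for k in keys))


# Example: node "n1" sends an update that node "n2" receives.
n1 = VectorClock("n1", ["n1", "n2"])
n2 = VectorClock("n2", ["n1", "n2"])
stamp = n1.tick()          # attach to the outgoing update
n2.merge(stamp)            # apply on receipt
print(VectorClock.happened_before(stamp, n2.clock))  # True
```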
  2. Name-based pub/sub allows for efficient and timely delivery of information to interested subscribers. A challenge is assigning the right name to each piece of content, so that it reaches the most relevant recipients. An example scenario is the dissemination of social media posts to first responders during disasters. We present FLARE, a framework using federated active learning assisted by naming. FLARE integrates machine learning and name-based pub/sub for accurate timely delivery of textual information. In this demo, we show FLARE’s operation. 
  3. During disasters, it is critical to deliver emergency information to appropriate first responders. Name-based information delivery provides efficient, timely dissemination of relevant content to first responder teams assigned to different incident response roles. People increasingly depend on social media for communicating vital information, using free-form text. Thus, a method that delivers these social media posts to the right first responders can significantly improve outcomes. In this paper, we propose FLARE, a framework using 'Social Media Engines' (SMEs) to map social media posts (SMPs), such as tweets, to the right names. SMEs perform natural language processing-based classification and exploit several machine learning capabilities, in an online, real-time manner. To reduce the manual labeling effort required for learning during the disaster, we leverage active learning, complemented by dispatchers with specific domain knowledge performing limited labeling. We also leverage federated learning across various public-safety departments with specialized knowledge to handle notifications related to their roles in a cooperative manner. We implement three different classifiers: for incident relevance, organization, and fine-grained role prediction. Each class is associated with a specific subset of the namespace graph. The novelty of our system is the integration of the namespace with federated active learning and inference procedures to identify and deliver vital SMPs to the right first responders in a distributed multi-organization environment, in real time. Our experiments using real-world data, including tweets generated by citizens during the wildfires in California in 2018, show that our approach outperforms both a simple keyword-based classification and several existing NLP-based classification techniques.
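FLARE's Social Media Engines combine online NLP classification with active learning to limit manual labeling. The sketch below shows one plausible form of such a stream-based, uncertainty-sampled loop using scikit-learn; it is an assumption-laden illustration, not the authors' implementation, and it omits the federated aggregation across departments. The 0.6 confidence threshold, class labels, and helper names are invented.

```python
# Sketch of a stream-based, uncertainty-sampled active-learning loop for
# classifying social media posts, in the spirit of FLARE's Social Media
# Engine (SME). Illustrative reconstruction only, not the authors' code.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

CLASSES = ["irrelevant", "fire", "medical", "police"]  # toy role labels

vectorizer = HashingVectorizer(n_features=2**18)       # stateless text features
model = SGDClassifier(loss="log_loss")  # online-updatable probabilistic model
model_ready = False                     # (use loss="log" on older scikit-learn)

def ask_dispatcher(post):
    # Stand-in for a dispatcher supplying a label (hypothetical helper).
    return "fire"

def process_stream(posts):
    global model_ready
    for post in posts:
        x = vectorizer.transform([post])
        if model_ready:
            proba = model.predict_proba(x)[0]
            if proba.max() >= 0.6:          # confident: deliver without asking
                yield post, model.classes_[proba.argmax()]
                continue
        # Uncertain (or no model yet): query a human label, then update online.
        label = ask_dispatcher(post)
        model.partial_fit(x, [label], classes=CLASSES)
        model_ready = True
        yield post, label

for post, role in process_stream(["Smoke and flames reported near Main St"]):
    print(role, "<-", post)
```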
  4. Graph-based namespaces are being increasingly used to represent the organization of complex and ever-growing information ecosystems and individual user roles. Timely and accurate information dissemination requires an architecture with appropriate naming frameworks, adaptable to changing roles, focused on content rather than network addresses. Today's complex information organization structures make such dissemination very challenging. To address this, we propose POISE, a name-based publish/subscribe architecture for efficient topic-based and recipient-based content dissemination. POISE proposes an information layer, improving on state-of-the-art Information-Centric Networking solutions in two major ways: 1) support for complex graph-based namespaces, and 2) automatic name-based load-splitting. POISE supports in-network graph-based naming, leveraged in a dissemination protocol which exploits information-layer rendezvous points (RPs) that perform name expansions. For improved robustness and scalability, POISE supports adaptive load-sharing via multiple RPs, each managing a dynamically chosen subset of the namespace graph. Excessive workload may cause one RP to turn into a "hot spot", impeding performance and reliability. To eliminate such traffic concentration, we propose an automated load-splitting mechanism, consisting of an enhanced namespace graph partitioning complemented by a seamless, lossless core migration procedure. Due to the nature of our graph partitioning and its complex objectives, off-the-shelf graph partitioners, e.g., METIS, are inadequate. We propose a hybrid, iterative bi-partitioning solution, consisting of an initial and a refinement phase. We implemented POISE on a DPDK-based platform. Using the important application of emergency response, our experimental results show that POISE outperforms state-of-the-art solutions, demonstrating its effectiveness in timely delivery and load-sharing.
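POISE splits the namespace graph across rendezvous points when one RP becomes a hot spot. As a deliberately simple illustration of the idea (not POISE's hybrid iterative bi-partitioning), the following greedily assigns whole namespace subtrees to a second RP by per-topic load; the topics and weights are invented.

```python
# Greedy bi-partitioning of a namespace tree by per-topic load, to split
# the namespace between two rendezvous points (RPs). This is a toy stand-in
# for POISE's hybrid iterative partitioner; topics and weights are made up.

children = {
    "/incident": ["/incident/fire", "/incident/medical"],
    "/incident/fire": ["/incident/fire/engine", "/incident/fire/ladder"],
    "/incident/medical": [],
    "/incident/fire/engine": [],
    "/incident/fire/ladder": [],
}
load = {"/incident": 1, "/incident/fire": 8, "/incident/medical": 3,
        "/incident/fire/engine": 5, "/incident/fire/ladder": 4}

def subtree_load(name):
    return load[name] + sum(subtree_load(c) for c in children[name])

def bipartition(root):
    """Move whole subtrees to RP2 until the two loads are roughly balanced."""
    total = subtree_load(root)
    rp2, rp2_load = [], 0
    # Consider larger subtrees first so fewer cuts are needed.
    for sub in sorted(children[root], key=subtree_load, reverse=True):
        if rp2_load + subtree_load(sub) <= total / 2:
            rp2.append(sub)
            rp2_load += subtree_load(sub)
    rp1 = [root] + [c for c in children[root] if c not in rp2]
    return rp1, rp2

print(bipartition("/incident"))
```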
  5.
  6. Delivering the right information to the right people in a timely manner can greatly improve outcomes and save lives in emergency response. A communication framework that flexibly and efficiently brings victims, volunteers, and first responders together for timely assistance can be very helpful. With disasters becoming more frequent and intense and first responder resources stretched thin, people increasingly depend on social media for communicating vital information. This paper proposes ONSIDE, a framework for coordinating disaster response that leverages social media and integrates it with Information-Centric dissemination for timely delivery of relevant information. We use a graph-based pub/sub namespace that captures the complex hierarchy of incident management roles. Regular citizens and volunteers using social media may not know of or have access to the full namespace. Thus, we utilize a social media engine (SME) to identify disaster-related social media posts and then automatically map them to the right name(s) in near real time. Using NLP and classification techniques, we direct the posts to the appropriate first responder(s) who can help with the posted issue. A major challenge for classifying social media in real time is the labeling effort for model training. Furthermore, as a disaster hits, there may not be enough data points available for labeling, and there may be concept drift in the content of the posts over time. To address these issues, our SME employs stream-based active learning methods, adapting as social media posts come in. Preliminary evaluation results show that the proposed solution can be effective.
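ONSIDE's namespace maps each classified post to names that interested first responders subscribe to. A minimal sketch of hierarchical name-prefix matching for such delivery follows; the namespace entries and roles are invented, and this is not ONSIDE's dissemination layer.

```python
# Toy name-based delivery: a post classified by the SME is published under a
# hierarchical name, and every subscriber whose name prefix covers that name
# receives it. Illustrative only; not ONSIDE's dissemination protocol, and
# the namespace entries are invented.

subscriptions = {
    "/ics/operations/fire": ["engine-7-chief"],
    "/ics/operations": ["operations-section-chief"],
    "/ics/planning": ["planning-officer"],
}

def covers(prefix, name):
    """True if 'prefix' is a component-wise prefix of 'name'."""
    p, n = prefix.strip("/").split("/"), name.strip("/").split("/")
    return n[:len(p)] == p

def recipients_for(name):
    return [r for prefix, subs in subscriptions.items()
            if covers(prefix, name) for r in subs]

# The SME mapped a citizen's post to a fire-related name under Operations:
print(recipients_for("/ics/operations/fire/structure"))
# -> ['engine-7-chief', 'operations-section-chief']
```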
  7. In many scenarios, information must be disseminated over intermittently-connected environments when network infrastructure becomes unavailable. Example scenarios include disasters in which first responders need to send updates about their critical tasks. If such updates pertain to a shared data set (e.g., pins on a map), their consistent dissemination is important. We can achieve this through causal ordering and consensus. Popular consensus algorithms, such as Paxos and Raft, are most suited for connected environments with reliable links. While some work has been done on designing consensus algorithms for intermittently-connected environments, such as the One-Third Rule (OTR) algorithm, there is a need to improve their efficiency and timely completion. We propose CoNICE, a framework to ensure consistent dissemination of updates among users in intermittently-connected, infrastructure-less environments. It achieves efficiency by exploiting hierarchical namespaces for faster convergence and lower communication overhead. CoNICE provides three levels of consistency to users' views, namely replication, causality, and agreement. It uses epidemic propagation to provide adequate replication ratios, and optimizes and extends Vector Clocks to provide causality. To ensure agreement, CoNICE extends basic OTR to support long-term fragmentation and critical decision invalidation scenarios. We integrate the multilevel consistency schema of CoNICE with a naming schema that follows a topic hierarchy-based dissemination framework, to improve functionality and performance. Performing city-scale simulation experiments, we demonstrate that CoNICE is effective in achieving its consistency goals, and is efficient and scalable in terms of convergence time and utilized network resources.
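CoNICE extends the basic One-Third Rule (OTR). For context, the sketch below shows a single round of classic OTR as described in the Heard-Of model literature: a process that hears from more than two-thirds of the n processes adopts the smallest most-frequent value, and decides only if more than two-thirds of the values it received coincide. It does not include CoNICE's extensions for fragmentation or decision invalidation.

```python
# One round of the basic One-Third Rule (OTR), from one process's point of
# view. 'received' maps sender to the value it proposed this round.
# Simplified illustration of the rule CoNICE builds on.
from collections import Counter

def otr_round(received, n, current_value):
    """Return (new_value, decided_value_or_None) after one round."""
    if len(received) > 2 * n / 3:
        counts = Counter(received.values())
        top = max(counts.values())
        # Adopt the smallest among the most frequently received values.
        new_value = min(v for v, c in counts.items() if c == top)
        # Decide if more than 2n/3 received values are identical.
        decided = new_value if top > 2 * n / 3 else None
        return new_value, decided
    return current_value, None

# n = 6 processes; this process heard proposals from 5 of them.
print(otr_round({"p1": 3, "p2": 3, "p3": 3, "p4": 3, "p5": 7}, 6, 9))
# -> (3, None): it adopts 3, but only 4 values are identical, which is not
#    more than 2n/3 = 4, so it cannot decide yet.
```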
  8. With the increasing diversity of application needs (datacenters, IoT, content retrieval, industrial automation, etc.), new network architectures are continually being proposed to address specific requirements. From a network management perspective, it is both important and challenging to enable evolution towards such new architectures. Given the ubiquity of the Internet, a clean-slate change of the entire infrastructure to a new architecture is impractical. It is believed that we will see new network architectures coming into existence with support for interoperability between separate architectural islands. We may have servers, and more importantly, content, residing in domains having different architectures. This paper presents COIN, a content-oriented interoperability framework for current and future Internet architectures. We seek to provide seamless connectivity and content accessibility across multiple such network architectures, including the current Internet. COIN preserves each domain's key architectural features and mechanisms while allowing flexibility for evolvability and extensibility. We focus on Information-Centric Networks (ICN), a prominent class of Future Internet architectures. COIN avoids expanding domain-specific protocols or namespaces. Instead, it uses an application-layer Object Resolution Service to deliver the right "foreign" names to consumers. COIN uses translation gateways that retain essential interoperability state, leverages encryption for confidentiality, and relies on domain-specific signatures to guarantee provenance and data integrity. We evaluate COIN using NDN and MobilityFirst, two prominent candidate ICN solutions, as well as IP. Measurements from an implementation of the gateways show that the overhead is manageable and scales well.
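COIN's application-layer Object Resolution Service supplies consumers with the "foreign" name a given domain understands. The following dictionary-backed lookup is a minimal illustration of that idea, with fabricated example names (NDN prefix, MobilityFirst GUID, IP URL); it is not COIN's actual service or protocol.

```python
# Toy object-resolution lookup mapping an application-level content
# identifier to its per-domain ("foreign") names, in the spirit of COIN's
# Object Resolution Service. Entries are invented for illustration.

ORS_TABLE = {
    "incident-42/map-update": {
        "NDN": "/phoenix/fire/incident-42/map-update",
        "MobilityFirst": "GUID:0x5a3f19c2",
        "IP": "https://example.org/incident-42/map-update",
    },
}

def resolve(content_id, consumer_domain):
    """Return the name the consumer's domain understands, or None."""
    names = ORS_TABLE.get(content_id, {})
    return names.get(consumer_domain)

print(resolve("incident-42/map-update", "NDN"))
# -> /phoenix/fire/incident-42/map-update
```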
  9. The Internet is composed of many interconnected, interoperating networks. With the recent advances in Future Internet design, multiple new network architectures, especially Information-Centric Networks (ICN), have emerged. Given the ubiquity of networks based on the Internet Protocol (IP), it is likely that we will have a number of different interconnecting network domains with different architectures, including ICNs. Their interoperability is important, but at the same time difficult to prove. A formal tool can be helpful for such analysis. ICNs have a number of unique characteristics warranting formal analysis that establishes properties which go beyond, and differ from, those used in the state of the art, because ICN operates at the level of content names rather than node addresses. We need to focus on node-to-content reachability, rather than node-to-node reachability. In this paper, we present a formal approach to model and analyze information-centric interoperability (ICI). We use Alloy Analyzer's model-finding approach to verify properties expressed as invariants for information-centric services (both pull- and push-based models), including content reachability and returnability. We extend our use of Alloy to model counting, to quantitatively analyze failure and mobility properties. We present a formally verified ICI framework that allows for seamless interoperation among a multitude of network architectures. We also report on the impact of domain types, routing policies, and binding techniques on the probability of content reachability and returnability, under failures and mobility.
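The paper checks node-to-content reachability with Alloy's model finding. As a loose analogue only, the snippet below expresses the same kind of property as plain graph reachability across domains connected by translation gateways; the domain names and content entry are invented, and this is not the Alloy model.

```python
# Node-to-content reachability as simple graph reachability: a consumer can
# fetch a content object if some gateway path leads from its domain to the
# domain hosting the content. Illustrative stand-in for the Alloy analysis.
from collections import deque

gateways = {            # directed inter-domain links via translation gateways
    "IP-core": ["NDN-campus"],
    "NDN-campus": ["MF-city", "IP-core"],
    "MF-city": [],
}
content_host = {"/weather/today": "MF-city"}   # which domain holds the content

def content_reachable(consumer_domain, content_name):
    target = content_host.get(content_name)
    seen, queue = {consumer_domain}, deque([consumer_domain])
    while queue:
        d = queue.popleft()
        if d == target:
            return True
        for nxt in gateways.get(d, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(content_reachable("IP-core", "/weather/today"))   # True via NDN-campus
```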
  10. Named Data Networking (NDN) has a number of forwarding behaviors, strategies, and protocols proposed by researchers and incorporated into the codebase to enable exploiting the full flexibility and functionality that NDN offers. This additional functionality introduces complexity, motivating the need for a tool that helps reason about, and verify, the basic properties an NDN data plane should guarantee. This paper proposes Name Space Analysis (NSA), a network verification framework to model and analyze NDN data planes. NSA can take as input one or more snapshots, each representing a particular state of the data plane. It then provides the verification result against specified properties. NSA builds on the theory of Header Space Analysis and extends it in a number of ways, e.g., supporting variable-sized headers with flexible formats, introducing name space functions, and allowing for name-based properties such as content reachability and name leakage-freedom. These important additions reflect the behavior and requirements of NDN, requiring modeling and verification foundations fundamentally different from those of traditional host-centric networks. For example, in name-based networks (NDN), host-to-content reachability is required, whereas the focus in host-centric networks (IP) is limited to host-to-host reachability. We have implemented NSA and identified a number of optimizations to enhance the efficiency of verification. Results from our evaluations, using snapshots from various synthetic test cases and the real-world NDN testbed, show that NSA is effective in finding errors pertaining to content reachability, loops, and name leakage, performs well, and is scalable.
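NSA verifies name-based properties such as content reachability over snapshots of an NDN data plane. To give a feel for the per-hop behavior such a check must model, here is a toy longest-prefix forwarding walk over an invented FIB snapshot; it is not NSA and greatly simplifies NDN forwarding.

```python
# Toy per-hop Interest forwarding by longest name-prefix match, and a check
# that a given name can reach a producer in a small FIB snapshot. This only
# illustrates the kind of data-plane behavior NSA models; names are invented.

fibs = {
    "rtrA": {"/video": "rtrB", "/sensors": "rtrC"},
    "rtrB": {"/video/live": "producer1", "/video": "producer2"},
    "rtrC": {"/sensors": "producer3"},
}

def longest_prefix_next_hop(fib, name):
    parts = name.strip("/").split("/")
    for i in range(len(parts), 0, -1):
        prefix = "/" + "/".join(parts[:i])
        if prefix in fib:
            return fib[prefix]
    return None

def reaches(start, name, max_hops=8):
    node = start
    for _ in range(max_hops):          # hop bound also catches forwarding loops
        if node not in fibs:
            return node                # left the router set: reached an endpoint
        node = longest_prefix_next_hop(fibs[node], name)
        if node is None:
            return None
    return None

print(reaches("rtrA", "/video/live/seg1"))   # -> producer1
```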